Incorporating computed tomography (CT) reconstruction operators into differentiable pipelines has proven beneficial in many applications. Such approaches usually focus on the projection data and keep the acquisition geometry fixed. However, precise knowledge of the acquisition geometry is essential for high-quality reconstruction results. In this paper, the differentiable formulation of fan-beam CT reconstruction is extended to the acquisition geometry. This allows gradient information from a loss function on the reconstructed image to be propagated into the geometry parameters. As a proof-of-concept experiment, this idea is applied to rigid motion compensation. The cost function is parameterized by a trained neural network which regresses an image quality metric from the motion-affected reconstruction alone. Using the proposed method, we are the first to optimize such an autofocus-inspired algorithm based on analytical gradients. The algorithm achieves a reduction in MSE by 35.5% and an improvement in SSIM by 12.6% over the motion-affected reconstruction. Beyond motion compensation, we see further use cases of our differentiable method in scanner calibration or hybrid techniques employing deep models.
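To make the idea concrete, the following is a minimal PyTorch sketch of such an autofocus-style optimization loop: per-view rigid motion parameters receive gradients from an image-quality score computed on the reconstruction. The reconstruction operator, quality network, and all sizes are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch (assumed setup): backpropagate an image-quality score through a
# differentiable reconstruction into per-view rigid motion parameters.
import torch

n_views, im_size = 360, 64
sinogram = torch.randn(n_views, 128)               # dummy projection data

def differentiable_fanbeam_reco(sino, motion):
    # Stand-in operator: any differentiable function of (sinogram, geometry)
    # suffices to demonstrate gradient flow into the geometry parameters.
    shift = motion.mean()
    return torch.tanh(sino.mean() + shift) * torch.ones(im_size, im_size)

quality_net = torch.nn.Sequential(                  # surrogate for the trained
    torch.nn.Flatten(0),                            # image-quality regressor
    torch.nn.Linear(im_size * im_size, 1))

motion = torch.zeros(n_views, 3, requires_grad=True)   # (tx, ty, angle) per view
opt = torch.optim.Adam([motion], lr=1e-2)

for _ in range(100):
    opt.zero_grad()
    reco = differentiable_fanbeam_reco(sinogram, motion)
    loss = quality_net(reco)                        # lower score = better quality
    loss.backward()                                 # gradients reach `motion`
    opt.step()
```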
Image annotation is an essential prior step to enable data-driven algorithms. In medical imaging, having large and reliably annotated data sets is crucial to recognize various diseases robustly. However, annotator performance varies immensely and thus impacts model training. Therefore, multiple annotators should often be employed, which is expensive and resource-intensive. Hence, it is desirable that users annotate unseen data and have an automated system to unobtrusively rate their performance during this process. We examine such a system based on whole slide images (WSIs) showing lung fluid cells. We evaluate two methods for the generation of synthetic individual cell images: conditional Generative Adversarial Networks and Diffusion Models (DM). For qualitative and quantitative evaluation, we conduct a user study to highlight the suitability of the generated cells. Users could not detect 52.12% of the images generated by the DM, proving the feasibility of replacing the original cells with synthetic cells without being noticed.
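As a rough illustration of class-conditional cell synthesis (not the models evaluated above), the following toy PyTorch generator conditions on a cell-type label via a learned embedding; class count, latent size, and image size are invented placeholders.

```python
# Toy class-conditional generator; a stand-in for cGAN/Diffusion cell synthesis.
import torch
import torch.nn as nn

N_CLASSES, LATENT, IMG = 4, 64, 32     # assumed cell-type count and sizes

class CondGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, LATENT)
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh())

    def forward(self, z, labels):
        # Concatenate noise with a class embedding -> synthetic cell image
        h = torch.cat([z, self.embed(labels)], dim=1)
        return self.net(h).view(-1, 1, IMG, IMG)

gen = CondGenerator()
z = torch.randn(8, LATENT)
labels = torch.randint(0, N_CLASSES, (8,))
fake_cells = gen(z, labels)            # (8, 1, 32, 32) synthetic cell patches
```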
Robust and reliable anonymization of chest radiographs constitutes an essential step before publishing large data sets of such for research purposes. The conventional anonymization process is carried out by obscuring personal information in the images with black boxes and removing or replacing meta-information. However, such simple measures retain biometric information in the chest radiographs, allowing patients to be re-identified by linkage attacks. We therefore see an urgent need to obfuscate the biometric information appearing in the images. To the best of our knowledge, we propose the first deep-learning-based approach for the targeted anonymization of chest radiographs while maintaining data utility for diagnostic and machine learning purposes. Our model architecture is a composition of three independent neural networks that, when used jointly, learn a deformation field capable of impeding patient re-identification. The individual influence of each component is investigated through an ablation study. Quantitative results on the ChestX-ray14 data set show a reduction of patient re-identification from 81.8% to 58.6% in the area under the receiver operating characteristic curve (AUC), with little impact on abnormality classification performance. This indicates the ability to preserve underlying abnormality patterns while increasing patient privacy. Furthermore, we compare the proposed deep-learning-based anonymization method with differentially private image pixelization and demonstrate the superiority of our method in resolving the privacy-utility trade-off for chest radiographs.
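The following is a minimal sketch, assuming PyTorch, of the basic mechanism a learned-deformation anonymizer relies on: warping the radiograph with a dense flow field via grid sampling. The flow is random here; in the described method it would be predicted by the trained networks.

```python
# Warp a (dummy) chest X-ray with a deformation field via grid_sample.
import torch
import torch.nn.functional as F

B, H, W = 1, 256, 256
xray = torch.rand(B, 1, H, W)                        # dummy chest radiograph

# Identity sampling grid in [-1, 1] (grid_sample convention, last dim = (x, y))
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
identity = torch.stack([xs, ys], dim=-1).unsqueeze(0)   # (1, H, W, 2)

flow = 0.02 * torch.randn(B, H, W, 2)                # stand-in for the learned field
warped = F.grid_sample(xray, identity + flow,        # deformed, "anonymized" image
                       align_corners=True)
```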
Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods have been introduced, outperforming conventional denoising algorithms on this task due to their high model capacity. However, for DL-based denoising to transition to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network that predicts the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When two DL-based denoisers (RED-CNN/QAE) are embedded in our pipeline, the denoising performance improves by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal, and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla models. In conclusion, the proposed trainable JBFs limit the error bound of deep neural networks to facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
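To illustrate the conventional component of such a pipeline, the following is a simplified, unvectorized NumPy sketch of a joint bilateral filter whose range weights come from a guidance image; in the described approach that guidance would be the CNN prediction, and the filter parameters would be trainable. Sigmas, window size, and inputs are illustrative.

```python
# Simplified joint bilateral filter: range weights taken from a guidance image.
import numpy as np

def joint_bilateral_filter(noisy, guidance, sigma_s=2.0, sigma_r=0.05, radius=3):
    H, W = noisy.shape
    out = np.zeros_like(noisy)
    # Precompute the spatial Gaussian kernel for the window
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    pad_n = np.pad(noisy, radius, mode="reflect")
    pad_g = np.pad(guidance, radius, mode="reflect")
    for i in range(H):
        for j in range(W):
            win_n = pad_n[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights come from the *guidance* image, not the noisy input
            rng = np.exp(-(win_g - guidance[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = np.sum(w * win_n) / np.sum(w)
    return out

noisy = np.random.rand(64, 64).astype(np.float32)
guide = noisy        # here trivially the noisy image; in practice a CNN prediction
denoised = joint_bilateral_filter(noisy, guide)
```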
Annotating data, especially in the medical domain, requires expert knowledge and a lot of effort. This limits the amount and/or usefulness of available medical data sets for experimentation. Therefore, developing strategies that increase the number of annotations while lowering the required domain knowledge is of interest. A possible strategy is gamification, i.e., transforming the annotation task into a game. We propose an approach to gamify the task of annotating lung fluid cells from pathological whole slide images. As this domain is unknown to non-expert annotators, we transfer images of cells detected with a RetinaNet architecture to the domain of flower images. This domain transfer is performed with a CycleGAN architecture for different cell types. In this more accessible domain, non-expert annotators can then be asked to annotate different kinds of flowers in a playful setting. To provide a proof of concept, this work shows that the domain transfer is feasible by evaluating an image classification network trained on real cell images and tested on cell images generated by the CycleGAN network. The classification network reaches an accuracy of 97.48% on the original lung fluid cells and 95.16% on the transformed lung fluid cells, respectively. With this study, we lay the foundation for future gamification research using CycleGANs.
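As a small illustration of the underlying mechanism (not the authors' code), the following PyTorch snippet shows the cycle-consistency term a CycleGAN-style cell-to-flower transfer minimizes; the generators are tiny stand-ins for the real networks.

```python
# Cycle-consistency loss between cell -> flower -> cell mappings.
import torch
import torch.nn as nn

G_cell2flower = nn.Conv2d(3, 3, 3, padding=1)    # toy generator X -> Y
G_flower2cell = nn.Conv2d(3, 3, 3, padding=1)    # toy generator Y -> X

cells = torch.rand(4, 3, 64, 64)                 # detected lung-fluid cell crops
flowers = G_cell2flower(cells)                   # cells rendered as "flowers"
reconstructed = G_flower2cell(flowers)           # mapped back to the cell domain

cycle_loss = nn.functional.l1_loss(reconstructed, cells)   # ||F(G(x)) - x||_1
```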
We address the challenge of building domain-specific knowledge models for industrial use cases, where labelled data and taxonomic information are initially scarce. Our focus is on inductive link prediction models as a basis for practical tools that support knowledge engineers with exploring text collections and discovering and linking new (so-called open-world) entities to the knowledge graph. We argue that, although neural approaches to text mining have yielded impressive results in the past years, current benchmarks do not properly reflect the typical challenges encountered in the industrial wild. Therefore, our first contribution is an open benchmark coined IRT2 (inductive reasoning with text) that (1) covers knowledge graphs of varying sizes (including very small ones), (2) comes with incidental, low-quality text mentions, and (3) includes not only triple completion but also ranking, which is relevant for supporting experts with discovery tasks. We investigate two neural models for inductive link prediction, one based on end-to-end learning and one that learns from the knowledge graph and text data in separate steps. These models compete with a strong bag-of-words baseline. The results show a significant performance advantage for the neural approaches in linking as soon as the available graph data decreases. For ranking, the results are promising, and the neural approaches outperform the sparse retriever by a wide margin.
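For intuition, a sparse bag-of-words baseline of this general kind could look like the following scikit-learn sketch, which ranks known entities for an unseen mention by TF-IDF cosine similarity; the entity texts and the mention are invented toy strings.

```python
# TF-IDF retrieval: rank known entities for an open-world text mention.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

entity_texts = {
    "pump_a": "centrifugal pump impeller housing seal",
    "valve_b": "ball valve actuator flange pressure rating",
    "motor_c": "induction motor stator winding bearing",
}
mention = "seal failure observed at the impeller of the pump"

vec = TfidfVectorizer()
E = vec.fit_transform(list(entity_texts.values()))   # entity matrix
q = vec.transform([mention])                         # open-world mention

scores = cosine_similarity(q, E).ravel()
ranking = sorted(zip(entity_texts, scores), key=lambda t: -t[1])
print(ranking)                                       # best-matching entities first
```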
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with them. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
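A minimal sketch of how such an approximation could be computed in practice: estimate inter-rater agreement (here pairwise Dice on binary masks, with random placeholder masks) and treat its mean as the ceiling beyond which higher similarity to a single reference annotation is unlikely to improve RWMP. The choice of Dice and the aggregation are assumptions for illustration.

```python
# Approximate PGT from inter-rater agreement (pairwise Dice over rater masks).
import itertools
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + 1e-8)

rng = np.random.default_rng(0)
annotators = [rng.random((128, 128)) > 0.5 for _ in range(3)]   # 3 raters' masks

pairwise = [dice(a, b) for a, b in itertools.combinations(annotators, 2)]
pgt_estimate = np.mean(pairwise)
print(f"approximate PGT (mean inter-rater Dice): {pgt_estimate:.3f}")
```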
Efficient surrogate modelling is a key requirement for uncertainty quantification in data-driven scenarios. In this work, a novel approach using Sparse Random Features for surrogate modelling in combination with self-supervised dimensionality reduction is described. The method is compared to other methods on synthetic and real data obtained from crashworthiness analyses. The results show the superiority of the described approach over state-of-the-art surrogate modelling techniques, namely Polynomial Chaos Expansions and Neural Networks.
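As a rough, hedged sketch of a random-features surrogate (plain random Fourier features with ridge regression, omitting the sparsity and the self-supervised dimensionality reduction described above), consider the following; data and hyperparameters are toy values.

```python
# Random Fourier features + ridge regression as a simple surrogate model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))               # design variables
y = np.sin(X[:, 0]) + X[:, 1] ** 2                  # toy crash response

D, gamma = 300, 1.0                                 # number of features, bandwidth
W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)          # random Fourier feature map

surrogate = Ridge(alpha=1e-3).fit(phi, y)
x_new = rng.uniform(-1, 1, size=(1, 5))
phi_new = np.sqrt(2.0 / D) * np.cos(x_new @ W + b)
print(surrogate.predict(phi_new))                   # cheap surrogate evaluation
```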
In recent years, distributional reinforcement learning has produced many state-of-the-art results. Increasingly sample-efficient distributional algorithms for the discrete action domain have been developed over time, varying primarily in the way they parameterize their approximations of value distributions and in how they quantify the differences between those distributions. In this work, we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods for the discrete action space translates to the continuous case. To that end, we compare them empirically on the PyBullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance regarding the number and placement of distributional atoms in the deterministic, continuous action setting.
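For concreteness, the following PyTorch sketch shows the quantile Huber loss that a QR-DQN-style distributional critic minimizes; in the described setup such a critic replaces the scalar critics of TD3/SAC. Shapes and values are illustrative, not taken from the paper.

```python
# Quantile Huber loss between predicted and target quantile atoms.
import torch
import torch.nn.functional as F

N = 32                                               # number of quantile atoms
taus = (torch.arange(N, dtype=torch.float32) + 0.5) / N

pred = torch.randn(8, N, requires_grad=True)         # critic's quantile estimates
target = torch.randn(8, N)                           # Bellman target quantiles

# Pairwise TD errors u[b, i, j] = target[b, j] - pred[b, i]
u = target.unsqueeze(1) - pred.unsqueeze(2)          # (batch, N_pred, N_target)
# Elementwise Huber loss of the same pairwise differences
huber = F.smooth_l1_loss(pred.unsqueeze(2).expand_as(u),
                         target.unsqueeze(1).expand_as(u), reduction="none")
# Asymmetric quantile weighting |tau - 1{u < 0}| applied to the Huber terms
loss = (torch.abs(taus.view(1, N, 1) - (u.detach() < 0).float()) * huber).mean()
loss.backward()                                      # gradients for the critic
```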
Artificial Intelligence (AI) has become commonplace in solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical radiology workflow. We also present a taxonomy of radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.